Patent abstract:
The present invention provides techniques for caching the results of a read request to a solid state device. In some embodiments, the techniques may be embodied as a method of caching solid state device read request results including receiving, at a solid state device, a data request from a host device communicatively coupled to the solid state device and retrieving, using a solid state device controller, a compressed data segment from the solid state device in response to the data request. The techniques may further include decompressing the compressed data segment, returning to the host device a block of the data segment in response to the data request, and caching one or more additional blocks of the data segment in a data buffer for subsequent read requests.
Publication number: FR3020885A1
Application number: FR1554092
Filing date: 2015-05-07
Publication date: 2015-11-13
Inventors: Lee Anton Sendelbach; Jeffrey S Werning
Applicant: HGST Netherlands BV
Primary IPC class:
Patent description:

SYSTEM AND METHOD FOR CACHING SOLID STATE DEVICE READ REQUEST RESULTS

Background

[0001] In a solid state device (SSD) adapter with data compression, a plurality of logical block addresses (LBAs) may be grouped together to form a much larger unit (e.g., a data segment). This unit can then be run through a compression engine provided on the adapter, allowing the LBA blocks to take up much less space than if they were stored at their natural size. The compressed data segment is then stored on the SSD (for example on a NAND flash memory). The size reduction can be 50% or more. This means that multiple LBAs can be stored as a single unit in the flash memory. When these LBAs are accessed for reading, the compressed data segment must be retrieved from the flash memory and decompressed. Of all the decompressed LBAs, potentially only a single LBA is needed to respond to the read request. [0002] Peripheral Component Interconnect Express (PCIe) is frequently used to link SSDs to a host system. The PCI Express system architecture is subject to performance constraints. Typical PCI Express topologies with a large device fan-out (such as an enterprise storage back end) have a lower overall uplink bandwidth (from the PCI Express switch upstream to the host) than downstream bandwidth (from the same PCI Express switch downstream to all attached storage controllers). This can create a bottleneck at a PCIe switch if the aggregate bandwidth of the downstream resources is greater than the upstream bandwidth. Such a bottleneck can delay the return of read results from an SSD to a host device.

Summary of the Invention

Techniques for caching the results of a read request to a solid state device are set forth herein.
In some embodiments, the techniques may be embodied as a method of caching solid state device read request results including receiving, at a solid state device, a request for data from a host device communicatively coupled to the solid state device and retrieving, using a solid state device controller, a compressed data segment from the solid state device in response to the data request. The techniques may further include decompressing the compressed data segment, returning to the host device a block of the data segment in response to the data request, and caching one or more additional blocks of the data segment in a data buffer for subsequent read requests. [0004] According to additional aspects of this exemplary embodiment, the data segment may be indicated by a logical block address. According to additional aspects of this exemplary embodiment, the data buffer may be provided in the memory of the solid state device. According to additional aspects of this exemplary embodiment, the data buffer may be provided in memory associated with the Peripheral Component Interconnect Express (PCIe) interface of the host device. According to additional aspects of this embodiment, the techniques may include receiving a second data request from the host device, determining that the data sent in response to the second data request is contained in the data buffer, and processing the second data request from the host device using data contained in the data buffer. According to additional aspects of this exemplary embodiment, processing the second data request from the host device using the data contained in the data buffer can comprise providing the host device with a scatter-gather list entry pointing to the memory in the data buffer containing the data provided in response.
[0009] According to additional aspects of this exemplary embodiment, determining that the data sent in response to the second data request is contained in the data buffer may be performed by a driver provided on the host device. According to additional aspects of this exemplary embodiment, determining that the data sent in response to the second data request is contained in the data buffer may be performed by the solid state device. [0011] According to additional aspects of this exemplary embodiment, the scatter-gather list may be provided by a driver provided on the host device. According to additional aspects of this exemplary embodiment, the techniques may further include logging one or more writes of data on the solid state device and determining, based on the one or more logged write requests, whether or not the data contained in the data buffer is valid. According to further aspects of this exemplary embodiment, the techniques may further comprise receiving a second request for data from the host device, determining that the data sent in response to the second request for data is contained in the data buffer, and determining, based on one or more logged write requests, that the data contained in the data buffer is invalid. Based on the determination that valid data responsive to the request is not in the buffer, the techniques may include retrieving, using the solid state device controller, a second compressed data segment from the solid state device, decompressing the second compressed data segment, and returning to the host device a block of the second data segment in response to the second data request. In further aspects of this exemplary embodiment, the techniques may include the use of an algorithm to maintain the data buffer.
According to additional aspects of this exemplary embodiment, the algorithm may comprise at least one of a least recently used (LRU) algorithm to age data out of the data buffer, a least frequently used (LFU) algorithm to age data out of the data buffer, and an adaptive replacement caching (ARC) algorithm to age data out of the data buffer. According to additional aspects of this embodiment, the host device may comprise at least one of: an enterprise server, a database server, a workstation, and a computer. According to additional aspects of this exemplary embodiment, the solid state device may comprise a Peripheral Component Interconnect Express (PCIe) device. Although the disclosed device is described as a solid state device, the embodiments may include devices that are not solid state devices (e.g., PCIe hard disk drives). In some embodiments, the techniques for caching solid state device read request results may be embodied as a computer program product comprising a series of instructions executable on a computer, the computer program product performing a process of caching solid state device read request results. The computer program can perform the steps of receiving, at a solid state device, a request for data from a host device communicatively coupled to the solid state device, retrieving, using a solid state device controller, a compressed data segment from the solid state device in response to the data request, decompressing the compressed data segment, returning to the host device a block of a data segment in response to the data request, and caching one or more additional data segment blocks in a data buffer for subsequent read requests. In some embodiments, the techniques for caching solid state device read request results may be embodied as a system for caching solid state device read request results. The system may include a host device and a first Peripheral Component Interconnect Express (PCIe) device. The first PCIe device may include instructions stored in memory. The instructions may include an instruction to send one or more blocks of a decompressed data segment, in response to a first data request, to a data buffer. The system may also include a Peripheral Component Interconnect Express (PCIe) switch communicatively coupling the first PCIe device and the host device, wherein the host device includes instructions stored in memory. The instructions stored in the host device memory may include an instruction to determine whether or not the data sent in response to a second data request is contained in the data buffer and an instruction to process the second data request from data contained in the data buffer based on a determination that the data sent in response to the second data request is contained in the data buffer. According to additional aspects of this exemplary embodiment, the data buffer may be provided in the memory of the solid state device. According to additional aspects of this exemplary embodiment, the data buffer may be provided in memory associated with the Peripheral Component Interconnect Express (PCIe) interface of the host device. According to additional aspects of this exemplary embodiment, the techniques may further comprise an instruction for determining, at a driver provided on the host device, that the data sent in response to a second data request is contained
in the data buffer, and an instruction for processing the second data request from the host device using data contained in the data buffer, wherein processing the second data request from the host device using data contained in the data buffer includes providing the host device with a scatter-gather list entry pointing to the memory in the data buffer containing the data provided in response. The present invention will now be described in more detail with reference to the exemplary embodiments as illustrated in the accompanying drawings. Although the present invention is described below with reference to the exemplary embodiments, it should be understood that it is not limited thereto. Those skilled in the art having access to the teachings herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the present invention as described herein, and with respect to which the present invention may be of significant utility.

Brief Description of the Drawings

[0024] To facilitate a better understanding of the present invention, reference is now made to the accompanying drawings, in which like elements are referenced with like numerals. These drawings should not be construed as limiting the present invention, but are provided by way of example only. FIG. 1 is an example block diagram illustrating a plurality of solid state devices in communication with a host device, according to an embodiment of the present invention. [0026] FIG. 2 illustrates an example module for caching solid state device read request results, according to an embodiment of the present invention. FIG. 3 illustrates a flowchart for caching solid state device read request results, according to an embodiment of the present invention. [0028] FIG.
4 illustrates a flowchart for caching solid state device read request results, according to an embodiment of the present invention. The present invention relates to caching solid state device read request results. Embodiments of the present invention provide systems and methods by which blocks retrieved in response to a read request, but not necessarily required by the read request, may be cached. In a solid state device (SSD) adapter with data compression, several logical block addresses (LBAs) can be grouped together to form a much larger unit. This unit can then be run through a compression engine provided on the adapter, allowing the LBA blocks to take up much less space than if they were stored at their natural size. The compressed data segment is then stored on the SSD (for example on a NAND flash memory). The size reduction can be 50% or more. This means that multiple LBAs can be stored as a single unit in the flash memory. When these LBAs are accessed for reading, the compressed data segment must be retrieved from the flash memory and decompressed.
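The grouping and compression of LBA blocks into a data segment described above can be sketched as follows. This is a minimal illustration using Python's zlib; the block size, segment size, and function names are assumptions for illustration, not the patent's implementation:

```python
import zlib

LBA_SIZE = 512          # bytes per logical block (an assumed size)
BLOCKS_PER_SEGMENT = 8  # number of LBA blocks grouped per segment (assumed)

def pack_segment(blocks):
    """Concatenate a group of LBA blocks and compress them into one segment."""
    assert all(len(b) == LBA_SIZE for b in blocks)
    return zlib.compress(b"".join(blocks))

def unpack_segment(segment):
    """Decompress a segment back into its individual LBA blocks."""
    raw = zlib.decompress(segment)
    return [raw[i:i + LBA_SIZE] for i in range(0, len(raw), LBA_SIZE)]

# Repetitive data compresses well, so eight 512-byte blocks can occupy
# far less than half of 4096 bytes once packed into a single segment.
blocks = [bytes([n]) * LBA_SIZE for n in range(BLOCKS_PER_SEGMENT)]
segment = pack_segment(blocks)
assert unpack_segment(segment) == blocks
assert len(segment) < LBA_SIZE * BLOCKS_PER_SEGMENT // 2
```

Note that a read of any single LBA in the segment forces the whole segment to be decompressed, which is what makes the sibling blocks available for caching.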
[0002] Of all the decompressed LBAs, potentially only a single LBA is needed to respond to the read request. Systems traditionally discard the decompressed blocks not needed to respond to a read request. Embodiments of the present invention provide systems and methods for caching such additional blocks. In some embodiments, such caching may be performed on an SSD. In one or more embodiments, such caching may be performed on a host device (for example, in embodiments based on the Non-Volatile Memory Express (NVMe) specification). Caching, whether on a host or an SSD, improves sequential read performance. Caching in a host's memory can free up space on a PCIe adapter to store blocks for future access. Caching in memory associated with a host can also use the large memory provided in the host to speculatively store reads. This provides the host faster access to speculative reads and can improve the performance of a PCIe-based SSD adapter. Referring now to the drawings, FIG. 1 is an example block diagram illustrating a solid state device in communication with a host device, in accordance with an embodiment of the present invention. Figure 1 includes a number of computing technologies such as a host system 102, a host CPU 104, and a PCI Express root complex 106 containing the driver 150. The PCI Express switch 108 can communicatively couple a plurality of targets (e.g., solid state devices such as NVMe-based targets) such as targets 110, 116, and 122 to the host system 102 via the PCI Express root complex 106. [0031] The target 110 may contain the NVMe controller 112 and the non-volatile memory 114. The target 116 may contain the NVMe controller 118 and the non-volatile memory 120. The target 122 may contain the NVMe controller 124 and the non-volatile memory 126.
The system memory 128 may contain memory-based resources accessible to the host system 102 via a memory interface (e.g., double data rate type three synchronous dynamic random-access memory (DDR3 SDRAM)). The system memory 128 may take any suitable form, such as, but not limited to, a solid state memory (e.g., a flash memory or a solid state device (SSD)), an optical memory, and a magnetic memory. Even though system memory 128 is preferably non-volatile, volatile memory may also be used. As illustrated in Figure 1, the system memory 128 may contain one or more data structures such as, for example, data buffers 138. The connection 142 between the PCI Express root complex 106 and the PCI Express switch 108 may be, for example, a PCI Express based interface. The connections 144, 146, and 148 may also be PCI Express based interfaces. Even though only the connections 144, 146, and 148 are illustrated, it will be appreciated that the number of targets connected to the PCI Express switch 108 may be smaller or significantly higher (e.g., 96 devices). As the number of targets attached to the PCI Express switch 108 increases, the bandwidth at the connection 142 may become a bottleneck. According to some embodiments, interface standards other than PCIe may be used for one or more portions including, but not limited to, Serial Advanced Technology Attachment (SATA), Advanced Technology Attachment (ATA), Small Computer System Interface (SCSI), PCI Extended (PCI-X), Fibre Channel, Serial Attached SCSI (SAS), Secure Digital (SD), embedded MultiMediaCard (eMMC), and Universal Flash Storage (UFS).
The host system 102 may take any suitable form, such as, without limitation, an enterprise server, a database host, a workstation, a personal computer, a mobile phone, a gaming device, a personal digital assistant (PDA), an email/SMS messaging device, a digital camera, a digital media player (e.g., MP3 player), a GPS navigation device, and a television system. The host system 102 and the target device may comprise additional components not shown in Figure 1 to simplify the drawing. In addition, in some embodiments not all of the illustrated components are present. The various control devices, blocks, and interfaces may further be implemented in any suitable manner. For example, a controller may take the form of one or more of a microprocessor or processor and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, and an embedded microcontroller, for example. In an SSD adapter with data compression, several LBA blocks are grouped together to form a much larger unit. This unit can then be run through a compression engine on an SSD adapter, allowing the LBA blocks to take up less space than if they were stored at their natural size. This data segment can then be stored on the SSD. The size reduction can be 50% or more. This means that multiple LBAs can be stored as a single unit on an SSD. When these LBAs are accessed for reading, the data segment must be retrieved from the SSD and decompressed. This means that all the LBAs are decompressed, but potentially only a single LBA is needed to respond to the read request. The other LBA blocks can be kept in RAM on the SSD to respond to future requests.
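The read path just described, in which the requested block is returned and its decompressed siblings are kept for later, can be sketched as follows. This is an illustrative model in which the cache is a plain dictionary and segments use a flat base-address scheme; all names are assumptions, not the patent's implementation:

```python
import zlib

LBA_SIZE = 512  # assumed block size

def serve_read(requested_lba, segment_base, compressed_segment, cache):
    """Decompress a segment, return the requested block, and cache the rest.

    `segment_base` is the LBA of the first block stored in the segment;
    the flat addressing scheme here is an illustrative assumption."""
    raw = zlib.decompress(compressed_segment)
    blocks = [raw[i:i + LBA_SIZE] for i in range(0, len(raw), LBA_SIZE)]
    for offset, block in enumerate(blocks):
        lba = segment_base + offset
        if lba != requested_lba:
            cache[lba] = block  # speculative caching of the unrequested blocks
    return blocks[requested_lba - segment_base]

cache = {}
segment = zlib.compress(b"A" * LBA_SIZE + b"B" * LBA_SIZE + b"C" * LBA_SIZE)
block = serve_read(101, 100, segment, cache)
assert block == b"B" * LBA_SIZE
assert set(cache) == {100, 102}  # siblings cached for subsequent reads
```

A subsequent read of LBA 100 or 102 could then be served from `cache` without touching the device.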
Embodiments of the present invention can improve SSD read performance by using the other LBAs recovered during normal processing of a read request on a compression-based SSD but not needed to answer the read request. These LBAs can be stored (e.g., buffered) on an SSD adapter, in a host memory, or in any other accessible location. In PCIe embodiments, a high-bandwidth PCIe bus and the availability of host memory can facilitate the storage of these LBAs in host-based memory. This speculative caching of read LBAs on a host system can be used to increase performance in sequential or random read scenarios. When the host requests cached LBAs, the adapter can respond with a scatter-gather list (SGL) entry pointing to the appropriate memory on the host machine. For example, a driver provided for an adapter may contain instructions to monitor read requests from a host. If a read request is received, the driver can determine whether or not one or more blocks needed to respond to the read request are present in a buffer. If it is determined that one or more blocks needed to respond to a read request are in a buffer, the driver may provide a scatter/gather list (SGL) element pointing to the memory location of the appropriately buffered LBA block. The host can recover the LBA block as if the SSD had DMA'd it directly to memory. This can result in very low latency reads that can greatly increase performance. According to some embodiments, software residing outside of a driver can buffer and monitor read requests. In one or more embodiments, a custom command may be provided (e.g., a custom read command) to check the availability of data in a buffer before reading from an SSD. The methods used for read requests using a buffer can be integrated into one or more standards (e.g., NVMe standards). [0042] In some embodiments, the buffer can be maintained by logging.
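The driver-side buffer check that answers a hit with a pointer-style SGL entry might look like the following sketch. The `(address, length)` tuple merely stands in for a real scatter/gather list entry, and `id()` stands in for a DMA-able host address; both are illustrative assumptions:

```python
def handle_read(lba, buffer):
    """Driver-side check before issuing a device read.

    If the LBA is buffered, return an SGL-style descriptor pointing at the
    host memory that already holds the block; otherwise return None so the
    caller falls back to a normal device read."""
    block = buffer.get(lba)
    if block is not None:
        # id() stands in for the physical address a real SGL entry would hold.
        return (id(block), len(block))
    return None

buffer = {7: b"\xab" * 512}
entry = handle_read(7, buffer)
assert entry is not None and entry[1] == 512  # cache hit: descriptor returned
assert handle_read(8, buffer) is None         # cache miss: go to the device
```

The hit path involves no device I/O at all, which is the source of the low-latency reads described above.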
For example, logging on an adapter can be used to ensure that the host's data is valid before redirecting the host to that data. A driver can monitor and log one or more write requests. Logging tracks whether or not a write request has invalidated data previously sent to a buffer. A driver may contain a data structure that can indicate the buffered blocks (for example, the LBAs in a buffer). If a write request detected by the driver corresponds to one of these blocks, the block contained in the buffer may be invalidated and/or discarded. For subsequent read requests corresponding to such a block, the adapter and/or the driver may have to re-access the data segment and retrieve the LBA block again. In some embodiments, one or more algorithms may be used to maintain a data buffer. For example, a driver may use a least recently used (LRU) algorithm to age data out of the data buffer, a least frequently used (LFU) algorithm to age data out of the data buffer, and/or an adaptive replacement caching (ARC) algorithm to age data out of the data buffer. FIG. 2 depicts an example module for caching solid state device read request results, according to an embodiment of the present invention. As illustrated in FIG. 2, the SSD read buffering module 210 may contain the block buffering module 212, the logging module 214, and the buffer management module 216. In one or more embodiments, the SSD read buffering module 210 may be implemented in a device driver (e.g., the driver 150 of FIG. 1) or in a host operating system (OS) (e.g., on the host CPU 104 of Figure 1). According to some embodiments, the SSD read buffering module 210 may be implemented in an SSD adapter (e.g., target 110, target 116, or target 122 of FIG. 1). The block buffering module 212 can cache one or more blocks recovered and decompressed as part of a read request. Compression may reduce the space occupied by stored LBA blocks by 50% or more relative to their uncompressed size.
This means that multiple LBAs can be stored as a single unit in the flash memory. When these LBAs are accessed for reading, the compressed data segment must be retrieved from the flash memory and decompressed. Of all the decompressed LBAs, potentially only a single LBA is needed to respond to the read request. The block caching module 212 may buffer such additional blocks. In some embodiments, such buffering may be performed on an SSD device. In one or more embodiments, such buffering may be performed on a host device (for example, in embodiments based on the Non-Volatile Memory Express (NVMe) specification). The block buffering module 212 can monitor read requests from a host. If a read request is received, the block buffering module 212 may determine whether or not one or more blocks necessary to respond to the read request are present in a buffer. If it is determined that one or more blocks needed to respond to a read request are in a buffer, the block buffering module 212 may provide a scatter/gather list (SGL) element pointing to the memory location of the buffered LBA block. The host can recover the LBA block as if the SSD had DMA'd it directly to its memory. Buffering by the block caching module 212, whether on a host or on an SSD, improves sequential read performance. Buffering using a host's memory can free up space on a PCIe adapter to store blocks for future access. Buffering in memory associated with a host may also use the host's large memory to speculatively store reads. This provides the host faster access to speculative reads and can improve the performance of a PCIe-based SSD adapter. The logging module 214 can check whether the data contained in the host is valid before redirecting the host to this data. The logging module 214 can monitor and log one or more write requests. The logging module 214 can track whether or not a write request has invalidated data previously sent to a buffer.
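The write-journal invalidation performed by a logging module of this kind can be sketched as follows. Class and method names are illustrative assumptions, not the patent's implementation:

```python
class WriteLog:
    """Journal of write requests used to invalidate stale buffered blocks."""

    def __init__(self, buffer):
        self.buffer = buffer   # maps LBA -> cached block
        self.entries = []      # ordered journal of written LBAs

    def record_write(self, lba):
        """Log a write; a write invalidates any buffered copy of that block."""
        self.entries.append(lba)
        self.buffer.pop(lba, None)

    def is_valid(self, lba):
        """A buffered block is valid only if no logged write has evicted it."""
        return lba in self.buffer

buffer = {1: b"old", 2: b"kept"}
log = WriteLog(buffer)
log.record_write(1)           # host overwrites LBA 1
assert not log.is_valid(1)    # buffered copy discarded
assert log.is_valid(2)        # untouched block remains valid
```

On a later read of LBA 1 the driver would fall back to the device, re-fetch the segment, and re-buffer the block.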
The logging module 214 may contain a data structure that can indicate the buffered blocks (e.g., the LBAs in a buffer). If a write request corresponding to one of these blocks is detected by the logging module 214, the block contained in the buffer may be invalidated and/or discarded. For subsequent read requests corresponding to such a block, the logging module 214 may indicate that a driver may have to re-access the appropriate data segment and retrieve the required LBA block again. The buffer management module 216 may use one or more algorithms to maintain a data buffer. For example, the buffer management module 216 may use a least recently used (LRU) algorithm to age data out of the data buffer, a least frequently used (LFU) algorithm to age data out of the data buffer, and/or an adaptive replacement caching (ARC) algorithm to age data out of the data buffer. The buffer management module 216 may accept one or more parameters indicating a buffer size, a preferred aging algorithm, one or more memory locations to be used to create a buffer, or another configurable buffer parameter. FIG. 3 illustrates a flowchart for caching solid state device read request results, according to an embodiment of the present invention. Process 300 is, however, only an example. The process 300 may be modified, for example by adding, changing, removing, or rearranging steps. In step 302, the process can begin. In step 304, a data request can be received from a host. For example, a read request may be received by a controller of an SSD for one or more LBAs. In step 306, a compressed data segment containing a plurality of LBA blocks may be retrieved. In step 308, the data segment can be decompressed. The decompression can provide one or more LBA blocks responding to the read request of the host.
The decompression may also provide one or more "additional" LBAs that are not required by the host's read request. In step 310, one or more blocks of LBA data responsive to the host's read request can be sent back to the host (for example via a scatter-gather list). In step 312, it can be determined whether or not additional LBA blocks are available. If additional LBA blocks are available, the method 300 may send the additional LBA blocks to a data buffer (e.g., in the host memory) in step 314. If all the decompressed blocks responsive to a read request have been sent to the host (i.e., all of them were required by the request), the method 300 may terminate at step 316. [0054] FIG. 4 illustrates a flowchart for caching solid state device read request results, according to an embodiment of the present invention. Process 400 is, however, only an example. The process 400 can be modified, for example by adding, changing, removing, or rearranging steps. In step 402, the process 400 can begin. In step 404, one or more write requests may be logged. In some embodiments, only the write requests corresponding to buffered blocks may be logged. In some embodiments, if data is detected as overwritten in step 406, it may be removed from a buffer in step 408. In some embodiments, it may be marked as invalid. In one or more embodiments, a driver intercepting a read request may determine, by reading a log, that buffered data is invalid and may not access that buffered data. Buffered data that has not been accessed can be marked as stale with respect to the buffer.
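One of the buffer-maintenance policies mentioned above is least recently used (LRU) eviction. A minimal sketch, in which the capacity and all names are illustrative assumptions:

```python
from collections import OrderedDict

class LRUBuffer:
    """Least recently used (LRU) data buffer for cached LBA blocks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # least recently used entries first

    def put(self, lba, block):
        self.entries[lba] = block
        self.entries.move_to_end(lba)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used

    def get(self, lba):
        if lba not in self.entries:
            return None
        self.entries.move_to_end(lba)  # an access refreshes the block's recency
        return self.entries[lba]

buf = LRUBuffer(capacity=2)
buf.put(1, b"a")
buf.put(2, b"b")
buf.get(1)            # touch LBA 1 so LBA 2 becomes the oldest
buf.put(3, b"c")      # capacity exceeded: LBA 2 is evicted
assert buf.get(2) is None
assert buf.get(1) == b"a" and buf.get(3) == b"c"
```

An LFU or ARC policy would keep the same `put`/`get` interface and change only the eviction decision.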
In step 410, the buffer or cache can be maintained using one or more algorithms (for example, a least recently used (LRU) algorithm to age data out of the data buffer, a least frequently used (LFU) algorithm to age data out of the data buffer, and/or an adaptive replacement caching (ARC) algorithm to age data out of the data buffer). In step 412, if data is determined to be stale with respect to the buffer, it may be removed from the buffer (for example, when additional data is buffered and extra space is needed). At step 416, a data request may be received from a host. For example, an SSD driver placed on a host can monitor read requests and can receive an SSD read request from the host. In step 418, the driver can determine whether or not the required data is in a buffer. If the required data is in a buffer, the method 400 may proceed to step 426. At step 426, the driver may send a scatter-gather list (SGL) containing one or more elements pointing to the memory locations of the buffered LBA blocks. If the data is not buffered (or is invalid), the method 400 can proceed to step 420. In step 420, the SSD controller can retrieve from the SSD the data segment appropriate to respond to the host's read request. In step 422, the data segment can be decompressed. In step 424, the SSD may send the LBA data block responsive to the host's read request. [0061] In step 428, the process 400 may terminate. Other embodiments are within the scope and spirit of the invention. For example, the operation described above can be implemented by means of software, hardware, firmware, hard-wired logic, or combinations thereof. One or more computer processors operating in accordance with instructions may perform the functions associated with caching solid state device read request results according to the present invention described above.
If so, it is within the scope of the present invention that such instructions may be stored on one or more non-transitory processor-readable storage media (e.g., a magnetic disk or any other storage medium). In addition, the modules implementing the functions may also be physically positioned in various locations, including being distributed such that portions of the functions are implemented at different physical locations. The present invention is not limited in scope to the specific embodiments described herein. In fact, various other embodiments of and modifications to the present invention, in addition to those described herein, will be apparent to those skilled in the art from the foregoing description and accompanying drawings. Such other embodiments and modifications are therefore intended to fall within the scope of the present invention. Moreover, although the present invention has been described herein in the context of a particular implementation in a particular environment for a particular purpose, those skilled in the art will recognize that its utility is not limited thereto and that the present invention can be beneficially implemented in any number of environments for any number of purposes. Therefore, the claims set forth below should be construed to encompass the full scope and spirit of the present invention as described herein.
Claims:
Claims (4)
[0001]
CLAIMS: 1. A method of caching solid state device read request results, comprising: receiving, at a solid state device, a request for data from a host device communicatively coupled to the solid state device; retrieving, using a solid state device controller, a compressed data segment from the solid state device in response to the data request; decompressing the compressed data segment; returning to the host device a block of the data segment in response to the data request; and caching one or more additional blocks of the data segment in a data buffer for subsequent read requests.
[0002]
The method of claim 1, wherein the data segment is indicated by a logical block address.
[0003]
The method of claim 1, wherein the data buffer is provided in the memory of the solid state device.
[0004]
The method of claim 1, wherein the data buffer is provided in memory associated with the Peripheral Component Interconnect Express (PCIe) interface of the host device. 5. The method of claim 1, further comprising: receiving a second request for data from the host device; determining that the data sent in response to the second data request is contained in the data buffer; and processing the second data request from the host device using data contained in the data buffer. 6. The method of claim 5, wherein processing the second request for data from the host device using data contained in the data buffer includes providing the host device with a scatter-gather list entry pointing to the memory in the data buffer containing the data sent in response. 7. The method of claim 5, wherein determining that the data sent in response to the second data request is contained in the data buffer is performed by a driver provided on the host device. 8. The method of claim 5, wherein determining that the data sent in response to the second data request is contained in the data buffer is performed by the solid state device. 9. The method of claim 6, wherein the scatter-gather list comes from a driver provided on the host device. 10. The method of claim 1, further comprising: logging one or more writes of data on the solid state device; and determining, based on the logged write request(s), whether or not the data contained in the data buffer is valid.
The method of claim 10, further comprising: receiving a second request for data from the host device; determining that the data sent in response to the second data request is contained in the data buffer; determining, based on one or more written write requests, that the data contained in the data buffer is not valid; recovering, with the aid of the solid state device controller, a second compressed data segment from the solid state device; Decompressing the second compressed data segment; and returning to the host device a block of the second data segment sent in response to the second data request. The method of claim 1, further comprising using an algorithm to maintain the data buffer. The method of claim 12, wherein the algorithm comprises at least one of a least recently used algorithm for chronologically classifying the data out of the data buffer, an algorithm used least frequently to chronologically classify the data. exiting the data buffer and an adaptive replacement caching algorithm to chronologically classify data output from the data buffer. The method of claim 1, wherein the host device comprises at least one of: an enterprise server, a database server, a workstation, and a computer. The method of claim 1, wherein the solid state device comprises an Express Peripheral Component Interconnect (PCIe) device. 
A computer program product consisting of a series of executable instructions on a computer, wherein the computer program product performs a process of caching the solid state device read request results; the computer program implementing the steps of: receiving, at a solid state device, a request for data from a host device coupled in communication with the solid state device; recovering, using a solid state device controller, a compressed data segment from the solid state device in response to the data request; decompressing the compressed data segment; returning, to the host device, a block of the data segment sent in response to the -20-data request; and caching one or more additional blocks of the data segment in a data buffer for subsequent read requests. A system for caching solid state device read request results, the system comprising: a host device; a first express peripheral component interconnect device (PCIe), wherein the first express peripheral component interconnect device (PCIe) comprises instructions stored in memory, the instructions comprising: an instruction for sending one or more data blocks; an uncompressed data segment in response to a first data request to a data buffer; and an Express Peripheral Component Interconnect (PCIe) switch 15 communicatively coupling the first PCIe device and the host device; wherein the host device includes instructions stored in memory, the instructions comprising: an instruction for determining whether or not the data sent in response to a second data request is contained in the data buffer; and an instruction for processing the second data request from the data contained in the data buffer based on a determination that the data sent in response to the second data request is contained in the data buffer. -21-. The system of claim 17, wherein the data buffer is provided in the memory of the solid state device. 19. 
The system of claim 17, wherein the data buffer is provided in the memory associated with the Express Peripheral Component Interconnect (PCIe) of the host device. The system of claim 17, further comprising: an instruction to determine, at a reader provided on the host device, that the data sent in response to a second data request is contained in the data buffer; and a processing instruction of the second data request from the host device using the data contained in the data buffer, wherein the processing of the second data request from the host device using the Data contained in the data buffer includes providing the host device with a burst-group list entry pointing to the memory in the data buffer containing the data sent in response. -22-
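The read path of claims 1 and 5 — decompress a whole segment, return only the requested block, and cache the sibling blocks for later reads — can be illustrated with a minimal sketch. All names here (`SsdReadCache`, `SEGMENT_BLOCKS`, the dict-backed flash) are illustrative assumptions, not from the patent:

```python
import zlib

SEGMENT_BLOCKS = 8   # hypothetical: LBAs grouped into one compressed segment
BLOCK_SIZE = 4096    # hypothetical block size in bytes


class SsdReadCache:
    """Sketch of claims 1 and 5: return one block per read, cache the rest."""

    def __init__(self, flash):
        self.flash = flash    # maps segment id -> compressed segment bytes
        self.buffer = {}      # data buffer: LBA -> block bytes

    def read(self, lba):
        # Claim 5: a subsequent request may be satisfied from the buffer.
        if lba in self.buffer:
            return self.buffer[lba]
        # Claim 1: recover and decompress the whole compressed segment...
        segment_id = lba // SEGMENT_BLOCKS
        raw = zlib.decompress(self.flash[segment_id])
        blocks = [raw[i:i + BLOCK_SIZE] for i in range(0, len(raw), BLOCK_SIZE)]
        # ...cache every block of the segment for subsequent reads...
        base = segment_id * SEGMENT_BLOCKS
        for offset, block in enumerate(blocks):
            self.buffer[base + offset] = block
        # ...and return only the block that was actually requested.
        return self.buffer[lba]


# Hypothetical usage: one segment holding blocks 0-7.
flash = {0: zlib.compress(b"".join(bytes([i]) * BLOCK_SIZE
                                   for i in range(SEGMENT_BLOCKS)))}
cache = SsdReadCache(flash)
block = cache.read(3)   # decompresses the segment once, caches all 8 blocks
```

A later `cache.read(4)` is then served from the data buffer without touching the flash, which is the point of caching the additional blocks.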
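Claims 10 through 13 combine write logging with a buffer-maintenance algorithm. A minimal sketch of one of the named options — an LRU-maintained buffer whose entries are invalidated by logged writes — follows; the class and method names are illustrative assumptions:

```python
from collections import OrderedDict


class LruDataBuffer:
    """Sketch of claims 10-13: LRU aging plus write-based invalidation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # LBA -> block bytes, oldest first
        self.write_log = []            # claim 10: logged write requests

    def put(self, lba, block):
        self.entries[lba] = block
        self.entries.move_to_end(lba)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # age out least recently used

    def log_write(self, lba):
        # Claim 10: record the write; claim 11: the cached copy is now stale.
        self.write_log.append(lba)
        self.entries.pop(lba, None)

    def get(self, lba):
        if lba not in self.entries:
            return None                  # miss: caller re-reads from flash
        self.entries.move_to_end(lba)    # refresh recency on a hit
        return self.entries[lba]


# Hypothetical usage: capacity 2, so caching a third block ages out LBA 1,
# and a logged write to LBA 3 invalidates its cached copy.
buf = LruDataBuffer(capacity=2)
for lba in (1, 2, 3):
    buf.put(lba, bytes([lba]))
buf.log_write(3)
```

Claim 13 also names LFU and adaptive replacement caching (ARC) as alternatives; only the eviction policy in `put` would change.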
Patent family:
Publication number | Publication date
US9990298B2|2018-06-05|
GB2528534B|2017-02-15|
US20150324134A1|2015-11-12|
GB2528534A|2016-01-27|
DE102015005817A1|2015-11-12|
FR3020885B1|2019-05-31|
GB201507569D0|2015-06-17|
DE102015005817B4|2020-10-29|
Legal status:
2016-05-20| PLFP| Fee payment|Year of fee payment: 2 |
2017-04-13| PLFP| Fee payment|Year of fee payment: 3 |
2018-03-30| PLSC| Search report ready|Effective date: 20180330 |
2018-04-11| PLFP| Fee payment|Year of fee payment: 4 |
2019-04-10| PLFP| Fee payment|Year of fee payment: 5 |
2020-04-14| PLFP| Fee payment|Year of fee payment: 6 |
2020-04-24| TP| Transmission of property|Owner name: WESTERN DIGITAL TECHNOLOGIES, INC., US Effective date: 20200319 |
2021-04-12| PLFP| Fee payment|Year of fee payment: 7 |
Priority:
Application number | Filing date | Patent title
US14/275,468|US9990298B2|2014-05-12|2014-05-12|System and method for caching solid state device read request results|
US14275468|2014-05-12|